60 research outputs found

    Strategies and mechanisms for electronic peer review

    This paper, presented at the October 2000 Frontiers in Education Conference, discusses strategies and mechanisms for electronic peer review. It outlines Peer Grader, a system for peer grading of student assignments over the World-Wide Web. The system allows authors and reviewers to communicate and allows authors to update their submissions. It facilitates collaborative learning and makes it possible to break a large project into smaller portions. The paper summarizes a novel method of peer review. Educational levels: Graduate or professional.

    Probing the Landscape: Toward a Systematic Taxonomy of Online Peer Assessment Systems in Education

    We present a research framework for a taxonomy of online educational peer-assessment systems. The framework enables researchers in technology-supported peer assessment to understand the current landscape of technologies supporting student peer review and assessment, specifically their affordances and constraints. It helps identify the major themes in existing and potential research and formulate an agenda for future studies. It also informs educators and system-design practitioners about use cases and design options.

    Toward Better Training in Peer Assessment: Does Calibration Help?

    For peer assessments to be helpful, student reviewers need to submit reviews of good quality. This requires training or guidance from teaching staff, lest reviewers read each other's work uncritically and assign good scores while offering few suggestions. One approach to improving review quality is calibration: comparing a student's individual review to a standard, usually a review done by teaching staff on the same reviewed artifact. In this paper, we categorize two modes of calibration for peer assessment and discuss our experience with both of them in a pilot study with the Expertiza system.
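    As a rough illustration of that comparison step, the sketch below scores a student's rubric ratings against a staff review of the same artifact. The rubric criteria, point scale, and mean-absolute-error metric are illustrative assumptions, not the calibration method actually used in Expertiza.

        # A minimal calibration sketch, assuming hypothetical rubric criteria
        # and a mean-absolute-error metric; Expertiza's actual method may differ.

        def calibration_error(student_scores: dict[str, int],
                              staff_scores: dict[str, int]) -> float:
            """Mean absolute difference between a student's rubric scores
            and the staff 'gold standard' review of the same artifact."""
            diffs = [abs(student_scores[c] - staff_scores[c])
                     for c in staff_scores]
            return sum(diffs) / len(diffs)

        # Staff review of one artifact vs. a student's review of the same one.
        staff = {"clarity": 4, "correctness": 5, "completeness": 3}
        student = {"clarity": 5, "correctness": 5, "completeness": 1}

        print(calibration_error(student, staff))  # 1.0; lower means better calibrated

    A large error would flag the reviewer as needing further training before their scores are trusted.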

    Assessing the Quality of Automatic Summarization for Peer Review in Education

    Abstract: Technology-supported peer review has drawn much interest from educators and researchers. It encourages active learning and provides students with timely feedback and multiple perspectives on their work. Current online peer-review systems allow a student's work to be reviewed by a handful of peers. While this is a good way to obtain a high degree of confidence in the assessment, reading a large amount of feedback can be overwhelming; we have observed that students ignore some feedback when the volume gets too large. In this work, we automatically summarize the feedback by extracting content mentioned by multiple reviewers, which captures the strengths and weaknesses of the work. We evaluate different automatic summarization algorithms and summary lengths on an educational peer-review dataset rated by a human. In general, students found that medium-sized generated summaries (5-10 sentences) encapsulate the context of the reviews, convey their intent, and help them judge the quality of the work.
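    As a generic illustration of extractive summarization over review text (not the specific algorithms evaluated in the paper), the sketch below scores each sentence by how much of its vocabulary recurs across all reviews and keeps the top-ranked sentences; points echoed by several reviewers tend to score highest.

        # A self-contained sketch of extractive review summarization.
        # The scoring heuristic and example reviews are illustrative only.

        import re
        from collections import Counter

        def tokenize(sentence: str) -> set[str]:
            return set(re.findall(r"[a-z']+", sentence.lower()))

        def summarize(reviews: list[str], k: int = 5) -> list[str]:
            # Split each review into sentences.
            sentences = [s.strip()
                         for review in reviews
                         for s in re.split(r"(?<=[.!?])\s+", review)
                         if s.strip()]
            # How often each word occurs across all review sentences.
            freq = Counter(w for s in sentences for w in tokenize(s))
            # Sentences built from frequently recurring words likely express
            # content mentioned by several reviewers.
            def score(s: str) -> float:
                words = tokenize(s)
                return sum(freq[w] for w in words) / max(len(words), 1)
            return sorted(sentences, key=score, reverse=True)[:k]

        reviews = [
            "The design section is clear. The test coverage is weak.",
            "Clear design discussion, but tests are missing for edge cases.",
            "Good structure overall. Please add more tests.",
        ]
        for sentence in summarize(reviews, k=2):
            print(sentence)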

    Grading by Experience Points: An Example from Computer Ethics

    Courses are usually graded on percentages: a certain percentage is required for each letter grade. Students often see this as a negative, in which they can only lose points, not gain points, and put their average at risk with each new assignment. This contrasts with the world of online gaming, where they gain "experience points" from each new activity, and their score monotonically increases toward a desired goal. Courses, too, can be graded by experience points. Last fall, the author graded his Ethics in Computing class this way. Students earned points for a variety of activities, mainly performing ethical analyses related to computing and participating in debates on ethics-related topics. The grading system served as an inducement to student involvement, with students eagerly signing up for analyses and investing considerable effort in debates. However, it seemed to motivate students to focus more on the quantity than the quality of their contributions.
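    The gain-only, monotone scheme is easy to picture in code. The point values and letter-grade thresholds below are made-up placeholders; the article does not give the actual scale used in the course.

        # A minimal sketch of experience-point grading with hypothetical
        # point values and thresholds (the article does not specify these).

        ACTIVITY_POINTS = {"ethical_analysis": 150, "debate": 100}
        GRADE_THRESHOLDS = [(1000, "A"), (800, "B"), (600, "C"), (400, "D")]

        def grade(completed: list[str]) -> tuple[int, str]:
            # Points only accumulate: no activity can lower the running total,
            # so the score increases monotonically toward the next letter grade.
            total = sum(ACTIVITY_POINTS[a] for a in completed)
            for threshold, letter in GRADE_THRESHOLDS:
                if total >= threshold:
                    return total, letter
            return total, "F"

        print(grade(["ethical_analysis", "debate", "debate", "debate"]))  # (450, 'D')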

    Peer Review and Wiki Textbooks: The Good, the Bad, and the Unmeasurable

    Over the past several years, we have had our students develop educational materials that can be used in future classes, such as student-authored wiki textbooks. Since the volume of writing is large, we don't have enough time to review it all ourselves, so we engage students in the process via peer review. Our results have been quite positive. In 2010-2011, by a margin of 78% to 6%, students were proud of their contributions to the wiki textbook. By 67% to 7%, they considered them credible entries for a college-level text. By 72% to 11%, students agreed that the reviews they received helped them improve their work. By 64% to 17%, students found it easy to complete their peer reviews using our peer-review tool, Expertiza. Each year we have noticed improvement, either in learning outcomes or in student satisfaction. We will discuss factors that seem to have contributed to this improvement.